post-hoc explanation
Do Sparse Subnetworks Exhibit Cognitively Aligned Attention? Effects of Pruning on Saliency Map Fidelity, Sparsity, and Concept Coherence
Suwal, Sanish, Bhusal, Dipkamal, Clifford, Michael, Rastogi, Nidhi
Prior works have shown that neural networks can be heavily pruned while preserving performance, but the impact of pruning on model interpretability remains unclear. In this work, we investigate how magnitude-based pruning followed by fine-tuning affects both low-level saliency maps and high-level concept representations. Using a ResNet-18 trained on ImageNette, we compare post-hoc explanations from Vanilla Gradients (VG) and Integrated Gradients (IG) across pruning levels, evaluating sparsity and faithfulness. We further apply CRAFT-based concept extraction to track changes in semantic coherence of learned concepts. Our results show that light-to-moderate pruning improves saliency-map focus and faithfulness while retaining distinct, semantically meaningful concepts. In contrast, aggressive pruning merges heterogeneous features, reducing saliency map sparsity and concept coherence despite maintaining accuracy. These findings suggest that while pruning can shape internal representations toward more human-aligned attention patterns, excessive pruning undermines interpretability.
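The magnitude-based pruning step described above can be sketched in a few lines; this is a minimal illustration with NumPy, not the authors' implementation, and the `prune_by_magnitude` helper, its threshold logic, and the example weights are assumptions:

```python
import numpy as np

def prune_by_magnitude(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights.

    sparsity=0.5 removes the 50% of entries with the smallest |w|,
    mirroring the pruning regimes compared in the abstract above.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold            # keep strictly larger weights
    return weights * mask

w = np.array([[0.5, -0.01, 0.3], [-0.002, 0.8, 0.05]])
pruned = prune_by_magnitude(w, 0.5)  # half of the entries set to zero
```

In practice this is applied per layer (followed by fine-tuning, as in the paper); frameworks such as PyTorch ship equivalent utilities.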
Explanation-based Data Augmentation for Image Classification
All the datasets used in our paper are publicly available and are to be used for research purposes. Table 1 gives the download links and licenses of these datasets.

Table 1: Download links and licenses of the datasets.
CUB-Families (2): https://github.com/HCPLab-SYSU/HS (use is restricted to non-commercial research and educational purposes)
Tiny ImageNet: http://cs231n.stanford.edu/tiny-imagenet-200.zip (use is restricted to non-commercial research and educational purposes)

Figure 1: Sample images for three classes of the CUB dataset collected in (3): Cardinal, Cerulean Warbler, Least Auklet.
Figure 2: Sample images for three classes of the Tiny-ImageNet dataset collected using the Flickr API: Abacus, Arabian Camel, Wooden Spoon.
Unifying Post-hoc Explanations of Knowledge Graph Completions
Lonardi, Alessandro, Badreddine, Samy, Besold, Tarek R., Martin, Pablo Sanchez
Post-hoc explainability for Knowledge Graph Completion (KGC) lacks formalization and consistent evaluations, hindering reproducibility and cross-study comparisons. This paper argues for a unified approach to post-hoc explainability in KGC. First, we propose a general framework to characterize post-hoc explanations via multi-objective optimization, balancing their effectiveness and conciseness. This unifies existing post-hoc explainability algorithms in KGC and the explanations they produce. Next, we suggest and empirically support improved evaluation protocols using popular metrics like Mean Reciprocal Rank and Hits@k. Finally, we stress the importance of interpretability as the ability of explanations to address queries meaningful to end-users. By unifying methods and refining evaluation standards, this work aims to make research in KGC explainability more reproducible and impactful.
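The two ranking metrics named above are computed directly from the (1-based) rank each query assigns to its ground-truth entity; a minimal sketch, with function names that are illustrative rather than taken from the paper:

```python
def mean_reciprocal_rank(ranks):
    """MRR: average of 1/rank over all queries (rank is 1-based)."""
    return sum(1.0 / r for r in ranks) / len(ranks)

def hits_at_k(ranks, k):
    """Hits@k: fraction of queries whose true entity ranks in the top k."""
    return sum(1 for r in ranks if r <= k) / len(ranks)

ranks = [1, 3, 2, 10]                # ranks of the true entities for 4 queries
mrr = mean_reciprocal_rank(ranks)    # (1 + 1/3 + 1/2 + 1/10) / 4
h3 = hits_at_k(ranks, 3)             # 3 of the 4 queries rank within the top 3
```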
In defence of post-hoc explanations in medical AI
Hatherley, Joshua, Munch, Lauritz, Bjerring, Jens Christian
Since the early days of the Explainable AI movement, post-hoc explanations have been praised for their potential to improve user understanding, promote trust, and reduce patient safety risks in black box medical AI systems. Recently, however, critics have argued that the benefits of post-hoc explanations are greatly exaggerated, since they merely approximate, rather than replicate, the actual reasoning processes that black box systems take to arrive at their outputs. In this article, we aim to defend the value of post-hoc explanations against this recent critique. We argue that even if post-hoc explanations do not replicate the exact reasoning processes of black box systems, they can still improve users' functional understanding of black box systems, increase the accuracy of clinician-AI teams, and assist clinicians in justifying their AI-informed decisions. While post-hoc explanations are not a "silver bullet" solution to the black box problem in medical AI, we conclude that they remain a useful strategy for addressing it.
Evaluating Post-hoc Explanations for Graph Neural Networks via Robustness Analysis
This work studies the evaluation of explanations for graph neural networks (GNNs), which is crucial to the credibility of post-hoc explainability in practical use. Conventional evaluation metrics, and even explanation methods -- which mainly follow the paradigm of feeding the explanatory subgraph into the model and measuring the output difference -- suffer from the notorious out-of-distribution (OOD) issue. In this work, we confront this issue by introducing a novel evaluation metric, termed OOD-resistant Adversarial Robustness (OAR). Specifically, we draw inspiration from the notion of adversarial robustness and evaluate post-hoc explanation subgraphs by calculating their robustness under attack. On top of that, an elaborate OOD reweighting block is inserted into the pipeline to confine the evaluation process to the original data distribution.
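The robustness-under-attack idea can be illustrated on a toy graph: perturb edges outside the explanatory subgraph and check how stable the model output stays. Everything below (the `toy_model` stand-in, the edge-flipping attack, the score) is an illustrative assumption rather than the paper's pipeline, and the OOD reweighting block is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_model(adj: np.ndarray) -> float:
    """Stand-in for a trained GNN: scores a graph by its normalized edge mass."""
    return float(np.tanh(adj.sum() / adj.shape[0]))

def robustness_score(adj, explanation_mask, n_attacks=100, flip_frac=0.2):
    """Randomly flip edges OUTSIDE the explanatory subgraph and measure how
    stable the model output stays; a higher score suggests the explanation
    covers the structure the model actually relies on."""
    base = toy_model(adj)
    non_expl = np.argwhere(explanation_mask == 0)  # attackable edge positions
    n_flip = max(1, int(flip_frac * len(non_expl)))
    diffs = []
    for _ in range(n_attacks):
        attacked = adj.copy()
        for i, j in non_expl[rng.choice(len(non_expl), size=n_flip, replace=False)]:
            attacked[i, j] = 1.0 - attacked[i, j]  # toggle the edge
        diffs.append(abs(toy_model(attacked) - base))
    return 1.0 - float(np.mean(diffs))

adj = (rng.random((6, 6)) < 0.4).astype(float)  # random toy graph
mask = np.zeros_like(adj)
mask[:3, :3] = 1.0                              # "explanation": first 3 nodes
score = robustness_score(adj, mask)
```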
In Defence of Post-hoc Explainability
The widespread adoption of machine learning in scientific research has created a fundamental tension between model opacity and scientific understanding. Whilst some advocate for intrinsically interpretable models, we introduce Computational Interpretabilism (CI) as a philosophical framework for post-hoc interpretability in scientific AI. Drawing parallels with human expertise, where post-hoc rationalisation coexists with reliable performance, CI establishes that scientific knowledge emerges through structured model interpretation when properly bounded by empirical validation. Through mediated understanding and bounded factivity, we demonstrate how post-hoc methods achieve epistemically justified insights without requiring complete mechanical transparency, resolving tensions between model complexity and scientific comprehension.